166 research outputs found
A penalty method for PDE-constrained optimization in inverse problems
Many inverse and parameter estimation problems can be written as
PDE-constrained optimization problems. The goal, then, is to infer the
parameters, typically coefficients of the PDE, from partial measurements of the
solutions of the PDE for several right-hand sides. Such PDE-constrained
problems can be solved by finding a stationary point of the Lagrangian, which
entails simultaneously updating the parameters and the (adjoint) state
variables. For large-scale problems, such an all-at-once approach is not
feasible as it requires storing all the state variables. In this case one
usually resorts to a reduced approach where the constraints are explicitly
eliminated (at each iteration) by solving the PDEs. These two approaches, and
variations thereof, are the main workhorses for solving PDE-constrained
optimization problems arising from inverse problems. In this paper, we present
an alternative method that aims to combine the advantages of both approaches.
Our method is based on a quadratic penalty formulation of the constrained
optimization problem. By eliminating the state variable, we develop an
efficient algorithm that has roughly the same computational complexity as the
conventional reduced approach while exploiting a larger search space. Numerical
results show that this method indeed reduces some of the non-linearity of the
problem and is less sensitive to the initial iterate.
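A minimal numerical sketch of the penalty idea, under purely illustrative assumptions (a diagonal "PDE" operator A(m) = diag(m), full observation of the state, and a hand-picked penalty weight; none of this is the paper's setup): the state u is eliminated in closed form from the quadratic penalty objective, leaving an unconstrained problem in the coefficients m alone.

```python
import numpy as np
from scipy.optimize import minimize

# Toy penalty formulation (illustrative assumptions, not the paper's setup):
# "PDE" A(m) u = q with A(m) = diag(m), data d = u_true, penalty weight lam.
n = 3
rng = np.random.default_rng(0)
m_true = 1.0 + rng.random(n)       # true coefficients
q = np.ones(n)                     # right-hand side (source)
d = q / m_true                     # data: u_true = A(m_true)^{-1} q
lam = 100.0                        # quadratic penalty parameter

def eliminate_state(m):
    """Minimize the penalty objective over the state u for fixed m."""
    A = np.diag(m)
    # Normal equations of min_u 0.5||u - d||^2 + 0.5*lam*||A u - q||^2
    return np.linalg.solve(np.eye(n) + lam * A.T @ A, d + lam * A.T @ q)

def objective(m):
    """Reduced penalty objective after eliminating the state variable."""
    u = eliminate_state(m)
    return 0.5 * np.sum((u - d) ** 2) + 0.5 * lam * np.sum((m * u - q) ** 2)

res = minimize(objective, np.ones(n), method="BFGS")
```

Because the data are noiseless, the reduced penalty objective attains zero exactly at the true coefficients, so a generic quasi-Newton method recovers them here.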
Sparse seismic imaging using variable projection
We consider an important class of signal processing problems where the signal
of interest is known to be sparse, and can be recovered from data given
auxiliary information about how the data was generated. For example, a sparse
Green's function may be recovered from seismic experimental data using sparsity
optimization when the source signature is known. Unfortunately, in practice
this information is often missing, and must be recovered from data along with
the signal using deconvolution techniques.
In this paper, we present a novel methodology to simultaneously solve for the
sparse signal and auxiliary parameters using a recently proposed variable
projection technique. Our main contribution is to combine variable projection
with sparsity promoting optimization, obtaining an efficient algorithm for
large-scale sparse deconvolution problems. We demonstrate the algorithm on a
seismic imaging example.
Comment: 5 pages, 4 figures
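The interplay between the two ingredients can be sketched on a toy blind sparse-deconvolution problem (an illustrative sketch, not the paper's algorithm; all sizes, the penalty, and the initial kernel guess are assumptions): the short source kernel w is eliminated in closed form by least squares, while sparsity in the signal x is promoted by proximal-gradient (ISTA) steps.

```python
import numpy as np

# Illustrative sketch: data y = w * x with sparse x and unknown short kernel w.
# Alternate a variable-projection step (closed-form least squares for w) with
# ISTA steps that promote sparsity in x.
rng = np.random.default_rng(1)
n, k = 60, 5
x_true = np.zeros(n)
x_true[[10, 25, 40]] = [1.0, -0.8, 0.6]            # sparse reflectivity
w_true = np.exp(-0.5 * (np.arange(k) - 2.0) ** 2)  # smooth source signature
y = np.convolve(x_true, w_true)                    # data, length n + k - 1

def conv_mat(v, cols):
    """Matrix C with C @ u == np.convolve(v, u) for u of length cols."""
    C = np.zeros((len(v) + cols - 1, cols))
    for j in range(cols):
        C[j:j + len(v), j] = v
    return C

def soft(z, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x, w, lam = np.zeros(n), np.ones(k), 0.05          # crude initial kernel guess
for _ in range(50):
    Wc = conv_mat(w, n)
    L = np.linalg.norm(Wc, 2) ** 2                 # Lipschitz constant of grad
    for _ in range(5):                             # ISTA steps on sparse x
        x = soft(x - Wc.T @ (Wc @ x - y) / L, lam / L)
    # Variable projection: eliminate the kernel w in closed form.
    w, *_ = np.linalg.lstsq(conv_mat(x, k), y, rcond=None)

residual = np.linalg.norm(np.convolve(w, x) - y) / np.linalg.norm(y)
```

Note the inherent scaling and shift ambiguities of blind deconvolution: only the product w * x is identifiable, which is why the quality measure above is the data residual rather than the individual factors.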
Automatic alignment for three-dimensional tomographic reconstruction
In tomographic reconstruction, the goal is to reconstruct an unknown object
from a collection of line integrals. Given a complete sampling of such line
integrals for various angles and directions, explicit inverse formulas exist to
reconstruct the object. Given noisy and incomplete measurements, the inverse
problem is typically solved through a regularized least-squares approach. A
challenge for both approaches is that in practice the exact directions and
offsets of the x-rays are only known approximately due to, e.g., calibration
errors. Such errors lead to artifacts in the reconstructed image. In the case
of sufficient sampling and geometrically simple misalignment, the measurements
can be corrected by exploiting so-called consistency conditions. In other
cases, such conditions may not apply and we have to solve an additional inverse
problem to retrieve the angles and shifts. In this paper we propose a general
algorithmic framework for retrieving these parameters in conjunction with an
algebraic reconstruction technique. The proposed approach is illustrated by
numerical examples for both simulated data and an electron tomography dataset.
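The alternation between reconstruction and parameter retrieval can be illustrated on a much simpler 1D analog (an assumption-laden sketch, not the paper's framework): each measurement is the unknown signal circularly shifted by an unknown integer offset, and we alternate between reconstructing the signal from the current offsets and re-estimating each offset by cross-correlation against the current reconstruction.

```python
import numpy as np

# 1D analog of joint alignment + reconstruction (illustrative sketch):
# measurements are circular shifts of an unknown signal by unknown offsets.
rng = np.random.default_rng(2)
n, m = 64, 8
f_true = np.zeros(n)
f_true[20:30] = 1.0                          # unknown "object"
shifts_true = rng.integers(-5, 6, size=m)    # unknown calibration offsets
data = np.stack([np.roll(f_true, s) for s in shifts_true])
data += 0.01 * rng.standard_normal(data.shape)

shifts = np.zeros(m, dtype=int)              # initial guess: no misalignment
for _ in range(10):
    # Reconstruction step: average the measurements shifted back.
    f = np.mean([np.roll(d, -s) for d, s in zip(data, shifts)], axis=0)
    # Alignment step: best circular shift via FFT cross-correlation.
    F = np.conj(np.fft.fft(f))
    for j in range(m):
        xc = np.fft.ifft(np.fft.fft(data[j]) * F).real
        shifts[j] = (int(np.argmax(xc)) + n // 2) % n - n // 2  # wrap to +-n/2

f = np.mean([np.roll(d, -s) for d, s in zip(data, shifts)], axis=0)
# The solution is only determined up to a common global shift.
err = min(np.linalg.norm(np.roll(f, g) - f_true) for g in range(n))
err /= np.linalg.norm(f_true)
```

As in the tomographic setting, the geometry is only identifiable up to a global transformation (here, a common shift), so the error is measured modulo that ambiguity.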
Relaxed regularization for linear inverse problems
We consider regularized least-squares problems of the form $\min_x \frac{1}{2}\|Ax - b\|_2^2 + \lambda R(x)$. Recently, Zheng et al.,
2019, proposed an algorithm called Sparse Relaxed Regularized Regression (SR3)
that employs a splitting strategy by introducing an auxiliary variable $w$ and
solves $\min_{x,w} \frac{1}{2}\|Ax - b\|_2^2 + \frac{\kappa}{2}\|x - w\|_2^2 + \lambda R(w)$. By minimizing out the variable $x$ we obtain an
equivalent system $\min_w \frac{1}{2}\|F_\kappa w - g_\kappa\|_2^2 + \lambda R(w)$. In our work we view the SR3 method as a
way to approximately solve the regularized problem. We analyze the conditioning
of the relaxed problem in general and give an expression for the SVD of
$F_\kappa$ as a function of $\kappa$.
Furthermore, we relate the Pareto curve of the original problem to that of the
relaxed problem and we quantify the error incurred by relaxation in terms of
$\kappa$. Finally, we propose an efficient iterative method for solving the
relaxed problem with inexact inner iterations. Numerical examples illustrate
the approach.
Comment: 25 pages, 14 figures, submitted to SIAM Journal on Scientific
Computing special issue, Sixteenth Copper Mountain Conference on Iterative
Methods
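For the sparsity-promoting case $R = \|\cdot\|_1$, the SR3 splitting admits a particularly simple alternating scheme: an exact linear solve in $x$ and a soft-thresholding (proximal) update in $w$. The following is a minimal sketch under assumed problem sizes and parameters, not the paper's implementation or analysis:

```python
import numpy as np

# Minimal SR3-style sketch for the l1-regularized (lasso-type) problem
#   min_{x,w} 0.5||Ax - b||^2 + lam*||w||_1 + 0.5*kappa*||x - w||^2
# via exact alternating minimization: a linear solve in x, a prox step in w.
rng = np.random.default_rng(3)
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 7, 12]] = [2.0, -1.5, 1.0]               # sparse ground truth
b = A @ x_true                                      # noiseless data

lam, kappa = 0.5, 1.0
AtA, Atb = A.T @ A, A.T @ b
chol = np.linalg.cholesky(AtA + kappa * np.eye(n))  # factor once, reuse

w = np.zeros(n)
for _ in range(200):
    rhs = Atb + kappa * w
    x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))  # x-update
    w = np.sign(x) * np.maximum(np.abs(x) - lam / kappa, 0)  # w-update (prox)
```

The joint objective is convex, so this exact block-coordinate descent converges to a global minimizer; the factorization of $A^\top A + \kappa I$ is computed once and reused across all iterations, which is what makes the splitting cheap.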
A data-driven approach to solving a 1D inverse scattering problem
In this paper, we extend a recently proposed approach for inverse scattering with Neumann boundary conditions [Druskin et al., Inverse Probl. 37, 075003 (2021)] to the 1D Schrödinger equation with impedance (Robin) boundary conditions. This method approaches inverse scattering in two steps: first, to extract a reduced order model (ROM) directly from the data and, subsequently, to extract the scattering potential from the ROM. We also propose a novel data-assimilation (DA) inversion method based on the ROM approach, thereby avoiding the need for a Lanczos-orthogonalization (LO) step. Furthermore, we present a detailed numerical study and a comparison of the accuracy and stability of the DA and LO methods.
Adaptive Grid Refinement for Discrete Tomography
Discrete tomography has proven itself as a powerful approach to image reconstruction from limited data. In recent years, algebraic reconstruction methods have been applied successfully to a range of experimental data sets. However, the computational cost of such reconstruction techniques currently prevents routine application to large data-sets. In this paper we investigate the use of adaptive refinement on QuadTree grids to reduce the number of pixels (or voxels) needed to represent an image. Such locally refined grids match well with the domain of discrete tomography as they are optimally suited for representing images containing large homogeneous regions. Reducing the number of pixels ultimately promises a reduction in computational cost.
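The compression offered by such locally refined grids can be made concrete with a small quadtree sketch (illustrative only, not the paper's discretization): a cell is split recursively unless it is homogeneous, so large constant regions of a discrete image collapse into single leaves.

```python
import numpy as np

# Quadtree sketch (illustrative): split a cell recursively unless it is
# homogeneous, so large constant regions are stored as single leaves.
def quadtree_leaves(img, r0=0, c0=0, size=None):
    """Return a list of (row, col, size, value) leaves covering img."""
    if size is None:
        size = img.shape[0]               # assume square, power-of-two side
    block = img[r0:r0 + size, c0:c0 + size]
    if size == 1 or block.min() == block.max():
        return [(r0, c0, size, block[0, 0])]   # homogeneous: one leaf
    h = size // 2
    leaves = []
    for dr, dc in [(0, 0), (0, h), (h, 0), (h, h)]:
        leaves += quadtree_leaves(img, r0 + dr, c0 + dc, h)
    return leaves

# A 64x64 binary image with one homogeneous disc: far fewer leaves than pixels.
n = 64
yy, xx = np.mgrid[:n, :n]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(int)
leaves = quadtree_leaves(img)
print(len(leaves), "leaves vs", n * n, "pixels")
```

Only cells crossed by the disc boundary are refined down to single pixels, so the number of unknowns scales with the boundary length rather than the area, which is precisely the regime where images consist of large homogeneous regions.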